AI Fairness


"I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment

Luo, Lin, Nakao, Yuri, Chollet, Mathieu, Inakoshi, Hiroya, Stumpf, Simone

arXiv.org Artificial Intelligence

Assessing fairness in artificial intelligence (AI) typically involves AI experts who select protected features, fairness metrics, and set fairness thresholds. However, little is known about how stakeholders, particularly those affected by AI outcomes but lacking AI expertise, assess fairness. To address this gap, we conducted a qualitative study with 30 stakeholders without AI expertise, representing potential decision subjects in a credit rating scenario, to examine how they assess fairness when placed in the role of deciding on features with priority, metrics, and thresholds. We reveal that stakeholders' fairness decisions are more complex than typical AI expert practices: they considered features far beyond legally protected features, tailored metrics for specific contexts, set diverse yet stricter fairness thresholds, and even preferred designing customized fairness. Our results extend the understanding of how stakeholders can meaningfully contribute to AI fairness governance and mitigation, underscoring the importance of incorporating stakeholders' nuanced fairness judgments.


A five-layer framework for AI governance: integrating regulation, standards, and certification

Agarwal, Avinash, Nene, Manisha J.

arXiv.org Artificial Intelligence

Purpose: The governance of artificial intelligence (AI) systems requires a structured approach that connects high-level regulatory principles with practical implementation. Existing frameworks lack clarity on how regulations translate into conformity mechanisms, leading to gaps in compliance and enforcement. This paper addresses this critical gap in AI governance. Methodology/Approach: A five-layer AI governance framework is proposed, spanning from broad regulatory mandates to specific standards, assessment methodologies, and certification processes. By narrowing its scope through progressively focused layers, the framework provides a structured pathway to meet technical, regulatory, and ethical requirements. Its applicability is validated through two case studies on AI fairness and AI incident reporting. Findings: The case studies demonstrate the framework's ability to identify gaps in legal mandates, standardization, and implementation. It adapts to both global and region-specific AI governance needs, mapping regulatory mandates to practical applications to improve compliance and risk management. Practical Implications: By offering a clear and actionable roadmap, this work contributes to global AI governance by equipping policymakers, regulators, and industry stakeholders with a model to enhance compliance and risk management. Social Implications: The framework supports the development of policies that build public trust and promote the ethical use of AI for the benefit of society. Originality/Value: This study proposes a five-layer AI governance framework that bridges high-level regulatory mandates and implementation guidelines. Validated through case studies on AI fairness and incident reporting, it identifies gaps such as missing standardized assessment procedures and reporting mechanisms, providing a structured foundation for targeted governance measures.


BiMi Sheets: Infosheets for bias mitigation methods

Defrance, MaryBeth, Bied, Guillaume, Buyl, Maarten, Lijffijt, Jefrey, De Bie, Tijl

arXiv.org Artificial Intelligence

Over the past 15 years, hundreds of bias mitigation methods have been proposed in the pursuit of fairness in machine learning (ML). However, algorithmic biases are domain-, task-, and model-specific, leading to a 'portability trap': bias mitigation solutions in one context may not be appropriate in another. Thus, a myriad of design choices have to be made when creating a bias mitigation method, such as the formalization of fairness it pursues, and where and how it intervenes in the ML pipeline. This creates challenges in benchmarking and comparing the relative merits of different bias mitigation methods, and limits their uptake by practitioners. We propose BiMi Sheets as a portable, uniform guide to document the design choices of any bias mitigation method. This enables researchers and practitioners to quickly learn its main characteristics and to compare them with their desiderata. Furthermore, the sheets' structure allows for the creation of a structured database of bias mitigation methods. In order to foster the sheets' adoption, we provide a platform for finding and creating BiMi Sheets at bimisheet.com.
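As an illustration of how such sheets could back a structured, queryable database, here is a minimal sketch in Python. The field names below are guesses based on the design choices the abstract lists (fairness formalization, pipeline intervention point), not the actual BiMi Sheet template, and the example entries are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BiMiSheet:
    """Hypothetical structured record for one bias mitigation method."""
    method_name: str
    fairness_definition: str   # e.g. "demographic parity"
    pipeline_stage: str        # "pre-processing", "in-processing", or "post-processing"
    model_specific: bool       # tied to one model family, or portable?
    task_types: list[str]      # e.g. ["binary classification"]

# Two illustrative entries (contents are examples, not real sheets).
sheets = [
    BiMiSheet("Reweighing", "demographic parity", "pre-processing",
              False, ["binary classification"]),
    BiMiSheet("Adversarial debiasing", "equalized odds", "in-processing",
              True, ["binary classification"]),
]

# A structured database then supports simple queries against practitioners'
# desiderata, e.g. listing only model-agnostic (portable) methods:
portable = [s.method_name for s in sheets if not s.model_specific]
```

Matching sheets against a practitioner's constraints then reduces to filtering on these fields rather than reading each method's paper.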


FairSense-AI: Responsible AI Meets Sustainability

Raza, Shaina, Chettiar, Mukund Sayeeganesh, Yousefabadi, Matin, Khan, Tahniat, Lotif, Marcelo

arXiv.org Artificial Intelligence

In this paper, we introduce FairSense-AI: a multimodal framework designed to detect and mitigate bias in both text and images. By leveraging Large Language Models (LLMs) and Vision-Language Models (VLMs), FairSense-AI uncovers subtle forms of prejudice or stereotyping that can appear in content, providing users with bias scores, explanatory highlights, and automated recommendations for fairness enhancements. In addition, FairSense-AI integrates an AI risk assessment component that aligns with frameworks like the MIT AI Risk Repository and NIST AI Risk Management Framework, enabling structured identification of ethical and safety concerns. The platform is optimized for energy efficiency via techniques such as model pruning and mixed-precision computation, thereby reducing its environmental footprint. Through a series of case studies and applications, we demonstrate how FairSense-AI promotes responsible AI use by addressing both the social dimension of fairness and the pressing need for sustainability in large-scale AI deployments. https://vectorinstitute.github.io/FairSense-AI, https://pypi.org/project/fair-sense-ai/ (Sustainability, Responsible AI, Large Language Models, Vision Language Models, Ethical AI, Green AI)


EARN Fairness: Explaining, Asking, Reviewing and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders

Luo, Lin, Nakao, Yuri, Chollet, Mathieu, Inakoshi, Hiroya, Stumpf, Simone

arXiv.org Artificial Intelligence

Numerous fairness metrics have been proposed and employed by artificial intelligence (AI) experts to quantitatively measure bias and define fairness in AI models. Recognizing the need to accommodate stakeholders' diverse fairness understandings, efforts are underway to solicit their input. However, conveying AI fairness metrics to stakeholders without AI expertise, capturing their personal preferences, and seeking a collective consensus remain challenging and underexplored. To bridge this gap, we propose a new framework, EARN Fairness, which facilitates collective metric decisions among stakeholders without requiring AI expertise. The framework features an adaptable interactive system and a stakeholder-centered EARN Fairness process to Explain fairness metrics, Ask stakeholders' personal metric preferences, Review metrics collectively, and Negotiate a consensus on metric selection. To gather empirical results, we applied the framework to a credit rating scenario and conducted a user study involving 18 decision subjects without AI knowledge. We identified their personal metric preferences and their acceptable levels of unfairness in individual sessions. Subsequently, we uncovered how they reached metric consensus in team sessions. Our work shows that the EARN Fairness framework enables stakeholders to express personal preferences and reach consensus, providing practical guidance for implementing human-centered AI fairness in high-risk contexts. Through this approach, we aim to harmonize the fairness expectations of diverse stakeholders, fostering more equitable and inclusive AI fairness.
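To make concrete what "fairness metrics" means in abstracts like the one above, here is a minimal, illustrative Python sketch of two common group fairness metrics that a framework like EARN Fairness might explain to stakeholders. The toy credit-approval data and function names are assumptions for illustration, not taken from the paper.

```python
def demographic_parity_diff(y_pred, group):
    """Absolute gap in positive-outcome rates between two groups (coded 0 and 1)."""
    rate = lambda g: (sum(p for p, gr in zip(y_pred, group) if gr == g)
                      / sum(1 for gr in group if gr == g))
    return abs(rate(0) - rate(1))

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups."""
    def tpr(g):
        # Among truly qualified applicants in group g, what fraction were approved?
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, group)
                 if gr == g and t == 1]
        return sum(p for _, p in pairs) / len(pairs)
    return abs(tpr(0) - tpr(1))

# Toy credit-rating example: 1 = approved / creditworthy, group = demographic label.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, group))           # gap in approval rates
print(equal_opportunity_diff(y_true, y_pred, group))    # gap in TPR
```

A stakeholder-facing threshold (the "acceptable level of unfairness" the study elicits) would then be a cap on values like these, e.g. requiring the gap to stay below 0.1.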


Towards Clinical AI Fairness: Filling Gaps in the Puzzle

Liu, Mingxuan, Ning, Yilin, Teixayavong, Salinelat, Liu, Xiaoxuan, Mertens, Mayli, Shang, Yuqing, Li, Xin, Miao, Di, Xu, Jie, Ting, Daniel Shu Wei, Cheng, Lionel Tim-Ee, Ong, Jasmine Chiat Ling, Teo, Zhen Ling, Tan, Ting Fang, RaviChandran, Narrendar, Wang, Fei, Celi, Leo Anthony, Ong, Marcus Eng Hock, Liu, Nan

arXiv.org Artificial Intelligence

The ethical integration of Artificial Intelligence (AI) in healthcare necessitates addressing fairness--a concept that is highly context-specific across medical fields. Extensive studies have been conducted to expand the technical components of AI fairness, while numerous calls for AI fairness have been raised from the healthcare community. Despite this, a significant disconnect persists between technical advancements and their practical clinical applications, resulting in a lack of contextualized discussion of AI fairness in clinical settings. Through a detailed evidence gap analysis, our review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions. We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized. Additionally, our analysis highlights a substantial reliance on group fairness, aiming to ensure equality among demographic groups from a macro healthcare system perspective; in contrast, individual fairness, focusing on equity at a more granular level, is frequently overlooked. To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities. Beyond applying existing AI fairness methods in healthcare, we further emphasize the importance of involving healthcare professionals to refine AI fairness concepts and methods to ensure contextually relevant and ethically sound AI applications in healthcare.


Inherent Limitations of AI Fairness

Communications of the ACM

AI fairness should not be considered a panacea: It may have the potential to make society more fair than ever, but it needs critical thought and outside help to make it happen.


Exploring the Impact of Lay User Feedback for Improving AI Fairness

Taka, Evdoxia, Nakao, Yuri, Sonoda, Ryosuke, Yokota, Takuya, Luo, Lin, Stumpf, Simone

arXiv.org Artificial Intelligence

Fairness in AI is a growing concern for high-stakes decision making. Engaging stakeholders, especially lay users, in fair AI development is promising yet overlooked. Recent efforts explore enabling lay users to provide AI fairness-related feedback, but there is still a lack of understanding of how to integrate users' feedback into an AI model and the impacts of doing so. To bridge this gap, we collected feedback from 58 lay users on the fairness of an XGBoost model trained on the Home Credit dataset, and conducted offline experiments to investigate the effects of retraining models on accuracy, and on individual and group fairness. Our work contributes baseline results of integrating user fairness feedback in XGBoost, and a dataset and code framework to bootstrap research in engaging stakeholders in AI fairness. Our discussion highlights the challenges of employing user feedback in AI fairness and points the way to a future application area of interactive machine learning.
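One way feedback like this could feed into retraining is by converting per-instance "unfair" flags into sample weights. The sketch below is a hypothetical illustration loosely inspired by the setup the abstract describes; the feedback format, function name, and linear weighting rule are assumptions, not the authors' method.

```python
def feedback_to_sample_weights(n_samples, unfair_votes, base=1.0, boost=2.0):
    """Upweight training instances that lay users flagged as unfairly scored.

    unfair_votes: dict mapping sample index -> number of users who flagged
    that instance's prediction as unfair. Weights scale linearly from `base`
    (no flags) to `boost` (the most-flagged instance).
    """
    max_votes = max(unfair_votes.values(), default=0)
    weights = []
    for i in range(n_samples):
        votes = unfair_votes.get(i, 0)
        w = base if max_votes == 0 else base + (boost - base) * votes / max_votes
        weights.append(w)
    return weights

# Example: 5 training instances; users flagged instance 1 three times, instance 4 once.
weights = feedback_to_sample_weights(5, {1: 3, 4: 1})
# The weights could then be passed when refitting a model, e.g.
# xgboost.XGBClassifier().fit(X, y, sample_weight=weights)
```

Offline experiments like the paper's would then compare accuracy and group/individual fairness metrics before and after such a reweighted refit.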


Kantian Deontology Meets AI Alignment: Towards Morally Robust Fairness Metrics

Mougan, Carlos, Brand, Joshua

arXiv.org Artificial Intelligence

Deontological ethics, specifically as understood through Immanuel Kant, provides a moral framework that emphasizes the importance of duties and principles rather than the consequences of actions. Despite the prominence of deontology, it is currently an overlooked approach in fairness metrics; this paper therefore explores the compatibility of a Kantian deontological framework with fairness metrics, part of the AI alignment field. We revisit Kant's critique of utilitarianism, which is the primary approach in AI fairness metrics, and argue that fairness principles should align with the Kantian deontological framework. By integrating Kantian ethics into AI alignment, we not only bring in a widely accepted, prominent moral theory but also strive for a more morally grounded AI landscape that better balances outcomes and procedures in pursuit of fairness and justice.


Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness

Ovalle, Anaelia, Subramonian, Arjun, Gautam, Vagrant, Gee, Gilbert, Chang, Kai-Wei

arXiv.org Artificial Intelligence

Intersectionality is a critical framework that, through inquiry and praxis, allows us to examine how social inequalities persist through domains of structure and discipline. Given AI fairness' raison d'être of "fairness," we argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness. Through a critical review of how intersectionality is discussed in 30 papers from the AI fairness literature, we deductively and inductively: 1) map how intersectionality tenets operate within the AI fairness paradigm and 2) uncover gaps between the conceptualization and operationalization of intersectionality. We find that ...

These notions vary across conceptualization (e.g., group, individual fairness [8]) and operationalization (e.g., pre/in/post-processing [2]) [54]; nevertheless, the literature generally agrees on the goal of minimizing negative outcomes across demographic groups, including groups associated with multiple, "intersectional" demographic attributes (e.g., Black women) [92]. However, Kong [66] observes that AI fairness papers often narrowly interpret intersectional subgroup fairness as intersectionality, the critical framework from which the term originates [29, 67]. This myopic conceptualization of intersectionality has non-trivial consequences for just AI design and epistemology (i.e., ways of knowing).